This script (https://github.com/NeuroDataDesign/seelviz/blob/gh-pages/Jupyter/NIFTI%20(.nii)%20Ilastik%20Membrane%20Detection.ipynb, copied below) generates a TIFF slice for each plane of Fear197 after downloading the volume via ndreg and converting the NumPy array into individual planes.
In [ ]:
import os
import numpy as np
from PIL import Image
import nibabel as nib
import scipy.misc

## This .nii was generated from the compute cloud
TokenName = 'Fear197ds10.nii'
img = nib.load(TokenName)

## Sanity check for shape
print(img.shape)

## Convert into an np array (a memmap in this case)
data = img.get_data()
print(data.shape)
print(type(data))

## Iterate through all planes to get slices
for plane in range(data.shape[0]):
    ## Convert the memmap slice into an ndarray for toimage
    output = np.asarray(data[plane])
    ## Save as TIFF for Ilastik (note: scipy.misc.toimage was removed in SciPy 1.3)
    scipy.misc.toimage(output).save('outfile' + TokenName + str(plane) + '.tiff')
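Since `scipy.misc.toimage` has been removed from recent SciPy releases, a minimal sketch of the same per-plane export using PIL's `Image.fromarray` instead; the synthetic uint8 volume here is a placeholder for the Fear197 data, and the output directory is a temp folder rather than the working directory:

```python
import os
import tempfile

import numpy as np
from PIL import Image

## Synthetic stand-in for the Fear197 volume (planes, rows, cols).
## Real .nii data may be float and would need casting to uint8 first.
data = (np.random.rand(5, 64, 64) * 255).astype(np.uint8)

outdir = tempfile.mkdtemp()
for plane in range(data.shape[0]):
    ## Image.fromarray is the usual replacement for scipy.misc.toimage
    Image.fromarray(data[plane]).save(
        os.path.join(outdir, 'outfile' + str(plane) + '.tiff'))

print(len(os.listdir(outdir)))
```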
From there, given these TIFF slices, we can visualize them in ImageJ as a 3D TIFF stack. Using the loading function in Ilastik, we can also directly import this 3D TIFF stack for object classification.
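Rather than juggling loose per-plane files, the slices can also be written as a single multi-page TIFF that ImageJ and Ilastik read as a stack. A sketch using PIL's `save_all`/`append_images` support for multi-frame TIFFs, again with a synthetic volume in place of the real data:

```python
import os
import tempfile

import numpy as np
from PIL import Image

## Synthetic stand-in volume: 4 planes of 32x32 uint8 data.
vol = (np.random.rand(4, 32, 32) * 255).astype(np.uint8)
frames = [Image.fromarray(p) for p in vol]

path = os.path.join(tempfile.mkdtemp(), 'stack.tiff')
## PIL writes all frames into one multi-page TIFF when save_all is set
frames[0].save(path, save_all=True, append_images=frames[1:])

## Reload to confirm the stack kept every plane
reloaded = Image.open(path)
print(reloaded.n_frames)
```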
We can then apply the same region-selection process, combining pixel and object classification, to train an object classifier. Some example images are shown below.
Using this, we can generate a classifier that we can then run with Ilastik in headless mode to do analytics on the data.
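For reference, a headless Ilastik run is typically launched from the shell via `run_ilastik.sh --headless` with a trained `.ilp` project; the sketch below only assembles such a command for illustration (the project filename and input stack are hypothetical placeholders, and the command is printed, not executed):

```python
## Build (but do not execute) a headless Ilastik invocation.
project = 'fear197_object_classifier.ilp'  # hypothetical trained project
cmd = [
    'run_ilastik.sh',
    '--headless',
    '--project=' + project,
    'stack.tiff',  # placeholder input stack
]
print(' '.join(cmd))
## To actually run it: subprocess.run(cmd, check=True)
```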
The main downsides: pixel classification as a basis for object classification is not very accurate, given the manual work of setting up and selecting individual pixels. It is also time-intensive and manually taxing, especially on the tiny slices I generated.